AI use
UK exposed to 'serious harm' by failure to tackle AI risks, MPs warn
Consumers and the UK financial system are being exposed to "serious harm" by the failure of the government and the Bank of England to get a grip on the risks posed by artificial intelligence, an influential parliamentary committee has warned. The warning comes amid looming concerns over how the burgeoning technology could disadvantage already vulnerable consumers, or even trigger a financial crisis if AI-led firms end up making similar financial decisions in response to economic shocks. More than 75% of City firms now use AI, with insurers and international banks among the biggest adopters.
- Europe > United Kingdom > England (0.66)
- North America > United States (0.15)
- Oceania > Australia (0.05)
- Europe > Ukraine (0.05)
- Banking & Finance (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.77)
Is AI taking the fun out of fantasy football?
For years, fantasy football has given every armchair manager the space to back up claims they could do a better job than the real thing. Whether you're competing against workmates, family members or strangers, the ability to pull together your own dream team is irresistible to millions of football fans. The competitive pastime has spawned a whole industry of content creators offering weekly tips for anyone looking to gain an edge as they sift through stats and manage transfers. Recently, more players have been turning to artificial intelligence (AI) tools for advice - but not everyone agrees they have a place in the virtual dugout.
- North America > United States (0.15)
- North America > Central America (0.15)
- Oceania > Australia (0.05)
- (15 more...)
- Leisure & Entertainment > Sports > Football (0.81)
- Leisure & Entertainment > Games > Computer Games (0.61)
- Leisure & Entertainment > Sports > Soccer (0.49)
Are these AI prompts damaging your thinking skills?
What was the last thing you asked an AI chatbot to do for you? Maybe you asked it for an essay structure to help answer a tricky question, to provide an insightful analysis of a chunky data set, or to check if your cover letter matches the job description. Some experts worry that outsourcing these kinds of tasks means your brain is working less - and could even be harming your critical thinking and problem-solving skills. Earlier this year, the Massachusetts Institute of Technology (MIT) published a study showing that people who used ChatGPT to write essays showed less activity in brain networks associated with cognitive processing while undertaking the exercise.
- North America > United States > Massachusetts (0.25)
- North America > Central America (0.15)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.15)
- (16 more...)
Artificial Intelligence Competence of K-12 Students Shapes Their AI Risk Perception: A Co-occurrence Network Analysis
Heilala, Ville, Sikström, Pieta, Setälä, Mika, Kärkkäinen, Tommi
As artificial intelligence (AI) becomes increasingly integrated into education, understanding how students perceive its risks is essential for supporting responsible and effective adoption. This research aimed to examine the relationships between perceived AI competence and risks among Finnish K-12 upper secondary students (n = 163) by utilizing a co-occurrence analysis. Students reported their self-perceived AI competence and concerns related to AI across systemic, institutional, and personal domains. The findings showed that students with lower competence emphasized personal and learning-related risks, such as reduced creativity, lack of critical thinking, and misuse, whereas higher-competence students focused more on systemic and institutional risks, including bias, inaccuracy, and cheating. These differences suggest that students' self-reported AI competence is related to how they evaluate both the risks and opportunities associated with artificial intelligence in education (AIED). The results of this study highlight the need for educational institutions to incorporate AI literacy into their curricula, provide teacher guidance, and inform policy development to ensure personalized opportunities for utilization and equitable integration of AI into K-12 education.
- North America > United States > District of Columbia > Washington (0.05)
- Europe > Finland > Central Finland > Jyväskylä (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (0.95)
- Information Technology > Artificial Intelligence > Natural Language (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
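The co-occurrence analysis the study describes can be sketched minimally: count how often each pair of reported concerns appears together across respondents, which yields the weighted edges of a co-occurrence network. This is a hypothetical illustration with made-up survey data, not the authors' code or dataset:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(responses):
    """Count how often each pair of items appears together across respondents.

    responses: list of sets, one set of reported concerns per student.
    Returns a Counter mapping ordered pairs to co-occurrence counts;
    each pair is an edge weight in the co-occurrence network.
    """
    counts = Counter()
    for concerns in responses:
        # Sort so each unordered pair is counted under one canonical key.
        for pair in combinations(sorted(concerns), 2):
            counts[pair] += 1
    return counts

# Hypothetical data: each set is one student's reported AI concerns.
responses = [
    {"reduced creativity", "misuse"},
    {"bias", "inaccuracy", "cheating"},
    {"reduced creativity", "misuse", "lack of critical thinking"},
]
edges = cooccurrence_counts(responses)
```

In a network view, pairs with higher counts become stronger edges, and clusters of tightly connected concerns (personal versus systemic) can then be read off the graph.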
Fashion house Valentino criticised over 'disturbing' AI handbag ads
Italian luxury fashion house Valentino is facing criticism after posting AI-generated adverts, described by viewers as disturbing, for one of its luxury handbags. The brand announced a collaboration with digital artists as part of what it dubbed a digital creative project promoting its new DeVain handbag. But an AI-generated advert it posted on Instagram has been met with intense criticism from fans, who called the visuals - and the use of AI - sloppy and sad. The BBC has approached Valentino for comment. The Instagram post promoting the handbag, which carries a label saying it was made using AI, shows a surreal collage of models spliced between Valentino logos and its DeVain bag.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (14 more...)
- Leisure & Entertainment (1.00)
- Textiles, Apparel & Luxury Goods (0.89)
Barriers to AI Adoption: Image Concerns at Work
Concerns about how workers are perceived can deter effective collaboration with artificial intelligence (AI). In a field experiment on a large online labor market, I hired 450 U.S.-based remote workers to complete an image-categorization job assisted by AI recommendations. Workers were incentivized by the prospect of a contract extension based on an HR evaluator's feedback. I find that workers adopt AI recommendations at lower rates when their reliance on AI is visible to the evaluator, resulting in a measurable decline in task performance. The effects are present despite a conservative design in which workers know that the evaluator is explicitly instructed to assess expected accuracy on the same AI-assisted task. This reduction in AI reliance persists even when the evaluator is reassured about workers' strong performance history on the platform, underscoring how difficult these concerns are to alleviate. Leveraging the platform's public feedback feature, I introduce a novel incentive-compatible elicitation method showing that workers fear heavy reliance on AI signals a lack of confidence in their own judgment, a trait they view as essential when collaborating with AI.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > Mexico (0.04)
- Asia > Pakistan (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Banking & Finance > Economy (0.48)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.34)
AI use in American newspapers is widespread, uneven, and rarely disclosed
Russell, Jenna, Karpinska, Marzena, Akinode, Destiny, Thai, Katherine, Emi, Bradley, Spero, Max, Iyyer, Mohit
AI is rapidly transforming journalism, but the extent of its use in published newspaper articles remains unclear. We address this gap by auditing a large-scale dataset of 186K articles from online editions of 1.5K American newspapers published in the summer of 2025. Using Pangram, a state-of-the-art AI detector, we discover that approximately 9% of newly-published articles are either partially or fully AI-generated. This AI use is unevenly distributed, appearing more frequently in smaller, local outlets, in specific topics such as weather and technology, and within certain ownership groups. We also analyze 45K opinion pieces from the Washington Post, the New York Times, and the Wall Street Journal, finding that they are 6.4 times more likely to contain AI-generated content than news articles from the same publications, with many AI-flagged op-eds authored by prominent public figures. Despite this prevalence, we find that AI use is rarely disclosed: a manual audit of 100 AI-flagged articles found only five disclosures of AI use. Overall, our audit highlights the immediate need for greater transparency and updated editorial standards regarding the use of AI in journalism to maintain public trust.
- South America > Guyana (0.28)
- Europe > Austria > Vienna (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- (22 more...)
- Research Report > New Finding (1.00)
- Personal (1.00)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- Information Technology > Communications > Social Media (0.67)
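The headline statistics in the audit above (roughly 9% of articles flagged, op-eds several times more likely to be flagged than news) reduce to simple proportions over binary detector output. This sketch uses invented flag counts chosen only to land near the reported figures; the function name and data are hypothetical, not from the paper:

```python
def ai_rate(flags):
    """Share of articles flagged as partially or fully AI-generated.

    flags: list of 0/1 detector outputs, one per article.
    """
    return sum(flags) / len(flags)

# Hypothetical detector output: 1 = flagged as AI-generated, 0 = not.
news_flags = [0] * 91 + [1] * 9    # ~9% of news articles flagged
oped_flags = [0] * 42 + [1] * 58   # invented op-ed flags for illustration

news_rate = ai_rate(news_flags)
oped_rate = ai_rate(oped_flags)
ratio = oped_rate / news_rate      # how many times more likely op-eds are flagged
```

With these invented counts the ratio comes out near the paper's 6.4x figure; the real analysis would additionally control for publication and topic mix.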
The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice
It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This enthusiasm assumes lawyers can effectively manage AI's risks. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this paradigm must be revisited. This paper argues that a new paradigm is needed to evaluate AI use in practice, given (a) AI's disconnection from reality and its lack of transparency, and (b) lawyers' paramount duties like honesty, integrity, and not to mislead the court. It presents an alternative model of AI use in practice that more holistically reflects these features (the verification-value paradox). That paradox suggests increases in efficiency from AI use in legal practice will be met by a correspondingly greater imperative to manually verify any outputs of that use, rendering the net value of AI use often negligible to lawyers. The paper then sets out the paradox's implications for legal practice and legal education, including for AI use but also the values that the paradox suggests should undergird legal practice: fidelity to the truth and civic responsibility.
- North America > United States > California (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Oceania > Australia > New South Wales (0.04)
- (18 more...)
- Research Report (1.00)
- Overview (1.00)
- Law > Litigation (1.00)
- Law > Government & the Courts (0.93)
- Education > Educational Setting > Higher Education (0.69)
- (2 more...)
Gen AI in Proof-based Math Courses: A Pilot Study
Klawa, Hannah, Rajpal, Shraddha, Thomas, Cigole
With the rapid rise of generative AI in higher education and the unreliability of current AI detection tools, developing policies that encourage student learning and critical thinking has become increasingly important. This study examines student use and perceptions of generative AI across three proof-based undergraduate mathematics courses: a first-semester abstract algebra course, a topology course and a second-semester abstract algebra course. In each case, course policy permitted some use of generative AI. Drawing on survey responses and student interviews, we analyze how students engaged with AI tools, their perceptions of generative AI's usefulness and limitations, and what implications these perceptions hold for teaching proof-based mathematics. We conclude by discussing future considerations for integrating generative AI into proof-based mathematics instruction.
- North America > United States > South Carolina (0.04)
- North America > United States > Maine > Androscoggin County > Lewiston (0.04)
- North America > United States > Indiana > Wayne County > Richmond (0.04)
- Questionnaire & Opinion Survey (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Research Report > Experimental Study (0.88)
- Education > Educational Setting > Higher Education (1.00)
- Education > Curriculum (0.89)
Imposing AI: Deceptive design patterns against sustainability
Beignon, Anaëlle, Thibault, Thomas, Maudet, Nolwenn
Generative AI is being massively deployed in digital services, at a scale that will result in significant environmental harm. We document how tech companies are transforming established user interfaces to impose AI use and show how and to what extent these strategies fit within established deceptive pattern categories. We identify two main design strategies that are implemented to impose AI use in both personal and professional contexts: imposing AI features in interfaces at the expense of existing non-AI features and promoting narratives about AI that make it harder to resist using it. We discuss opportunities for regulating the imposed adoption of AI features, which would inevitably lead to negative environmental effects.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > France > Grand Est > Bas-Rhin > Strasbourg (0.04)
- North America > United States > California > Santa Clara County > San Jose (0.04)
- (2 more...)
- Law (0.94)
- Information Technology > Services (0.94)
- Energy > Power Industry (0.67)